
    Learning neural codes for perceptual uncertainty

    Perception is an inferential process in which the state of the immediate environment must be estimated from sensory input. Inference in the face of noise and ambiguity requires reasoning with uncertainty, and much animal behaviour appears close to Bayes-optimal. This observation has inspired hypotheses for how the activity of neurons in the brain might represent the distributional beliefs necessary to implement explicit Bayesian computation. While previous work has focused on the sufficiency of these hypothesised codes for computation, relatively little consideration has been given to optimality in the representation itself. Here, we adopt an encoder-decoder approach to study representational optimisation within one hypothesised belief encoding framework: the distributed distributional code (DDC). We consider a setting in which typical belief distribution functions take the form of a sparse combination of an underlying set of basis functions, and the corresponding DDC signals are corrupted by neural variability. We estimate the conditional entropy over beliefs induced by these DDC signals using an appropriate decoder. Like other hypothesised frameworks, a DDC representation of a belief depends on a set of fixed encoding functions that are usually set arbitrarily. Our approach allows us to seek the encoding functions that minimise the decoder conditional entropy and thus optimise representational accuracy in an information-theoretic sense. We apply the approach to show how optimal encoding properties may adapt to represent beliefs in new environments, relating the results to experimentally reported neural responses.
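
    As an informal illustration of the encoding scheme described above, the sketch below (a minimal Python/NumPy example) forms a DDC by taking expectations of a set of encoding functions under a belief built as a sparse mixture of basis densities, and then corrupts the resulting signal with additive noise standing in for neural variability. The Gaussian-bump encoding functions, the mixture belief, and the noise level are illustrative assumptions, not the paper's exact construction.

        # Minimal DDC sketch: expectations of encoding functions under a belief,
        # plus noise. All parameter choices are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        z_grid = np.linspace(-5, 5, 501)                     # latent-variable grid
        dz = z_grid[1] - z_grid[0]

        def gaussian(z, mu, sigma):
            return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        # Encoding functions phi_i(z): Gaussian bumps with tunable centres and widths.
        centres = np.linspace(-4, 4, 12)
        phi = np.stack([gaussian(z_grid, c, 1.0) for c in centres])      # (12, 501)

        # Belief p(z): sparse combination of a few basis densities.
        basis_mus = rng.uniform(-3, 3, size=3)
        weights = rng.dirichlet(np.ones(3))
        belief = sum(w * gaussian(z_grid, m, 0.5) for w, m in zip(weights, basis_mus))
        belief /= belief.sum() * dz                          # normalise on the grid

        # DDC signal: expectations of the encoding functions under the belief,
        # corrupted by additive noise standing in for neural variability.
        r_clean = (phi * belief).sum(axis=1) * dz
        r_noisy = r_clean + 0.01 * rng.standard_normal(r_clean.shape)
        print(r_noisy.round(3))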

    Convolutional higher order matching pursuit

    We introduce a greedy generalised convolutional algorithm to efficiently locate an unknown number of sources in a series of (possibly multidimensional) images, where each source contributes a localised and low-dimensional but otherwise variable signal to its immediate spatial neighbourhood. Our approach extends convolutional matching pursuit in two ways: first, it takes the signal generated by each source to be a variable linear combination of aligned dictionary elements; and second, it executes the pursuit in the domain of higher-order multivariate cumulant statistics. The resulting algorithm adapts to varying signal and noise distributions to flexibly recover source signals in a variety of settings.
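
    For orientation, the sketch below implements plain convolutional matching pursuit on a one-dimensional signal with a known dictionary; the two extensions described above (variable linear combinations of aligned dictionary elements, and pursuit over higher-order cumulant statistics) are not reproduced. The dictionary, stopping rule, and signal are illustrative assumptions.

        # Plain (non-generalised) convolutional matching pursuit on a 1-D signal.
        import numpy as np

        def conv_matching_pursuit(signal, dictionary, n_iters=10):
            """Greedily peel off the best (template, shift, amplitude) at each step."""
            residual = np.asarray(signal, dtype=float).copy()
            events = []
            for _ in range(n_iters):
                best = None
                for k, atom in enumerate(dictionary):
                    atom = atom / np.linalg.norm(atom)
                    # Correlation of the residual with the atom at every valid shift.
                    corr = np.correlate(residual, atom, mode="valid")
                    t = int(np.argmax(np.abs(corr)))
                    if best is None or abs(corr[t]) > abs(best[0]):
                        best = (corr[t], k, t, atom)
                amp, k, t, atom = best
                residual[t:t + len(atom)] -= amp * atom      # subtract the fitted event
                events.append((k, t, amp))
            return events, residual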

    A neurally plausible model for online recognition and postdiction

    Humans and other animals are frequently near-optimal in their ability to integrate noisy and ambiguous sensory data to form robust percepts, which are informed both by sensory evidence and by prior expectations about the structure of the environment. It is suggested that the brain does so using the statistical structure provided by an internal model of how latent, causal factors produce the observed patterns. In dynamic environments, such integration often takes the form of postdiction, wherein later sensory evidence affects inferences about earlier percepts. As the brain must operate in current time, without the luxury of acausal propagation of information, how does such postdictive inference come about? Here, we propose a general framework for neural probabilistic inference in dynamic models based on the distributed distributional code (DDC) representation of uncertainty, naturally extending the underlying encoding to incorporate implicit probabilistic beliefs about both present and past. We show that, as in other uses of the DDC, an inferential model can be learnt efficiently using samples from an internal model of the world. Applied to stimuli used in psychophysics experiments, the framework provides an online and plausible mechanism for inference, including postdictive effects.
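
    A minimal sketch of the sample-based learning idea, reduced to a toy static model rather than the dynamic, postdictive setting of the paper: latent-observation pairs drawn from an internal model are used to regress encoding functions of the latent onto features of the observation, so that the learned readout approximates the posterior expectations that make up a DDC. The generative model, the observation features, and the encoding functions below are all illustrative assumptions.

        # Learn a DDC-style recognition readout from samples of an internal model.
        import numpy as np

        rng = np.random.default_rng(1)

        # Toy internal model: latent z, noisy observation x.
        z = rng.standard_normal(5000)
        x = np.tanh(z) + 0.3 * rng.standard_normal(5000)

        # Encoding functions phi_i(z) whose posterior expectations we want to read out.
        centres = np.linspace(-2, 2, 8)
        phi_z = np.exp(-0.5 * (z[:, None] - centres) ** 2)               # (N, 8)

        # Observation features psi_j(x) feeding a linear readout.
        def features(x):
            return np.column_stack([np.ones_like(x), x, x**2, np.tanh(x)])

        # Least-squares regression of phi(z) on psi(x): the readout approximates
        # E[phi(z) | x], i.e. a DDC representation of the posterior belief about z.
        W, *_ = np.linalg.lstsq(features(x), phi_z, rcond=None)

        x_test = np.array([0.5])
        print((features(x_test) @ W).round(3))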

    Dynamics on the manifold: Identifying computational dynamical activity from neural population recordings

    The question of how the collective activity of neural populations gives rise to complex behaviour is fundamental to neuroscience. At the core of this question lie considerations about how neural circuits can perform computations that enable sensory perception, decision making, and motor control. It is thought that such computations are implemented through the dynamical evolution of distributed activity in recurrent circuits. Thus, identifying dynamical structure in neural population activity is a key challenge towards a better understanding of neural computation. At the same time, interpreting this structure in light of the computation of interest is essential for linking the time-varying activity patterns of the neural population to ongoing computational processes. Here, we review methods that aim to quantify structure in neural population recordings through a dynamical system defined in a low-dimensional latent variable space. We discuss advantages and limitations of different modelling approaches and address future challenges for the field.
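
    As a concrete instance of the model class these methods target, the sketch below simulates a two-dimensional latent linear dynamical system observed through a fixed linear readout with noise; the reviewed methods work in the reverse direction, inferring the latents and dynamics (often nonlinear) from recordings alone. All parameters here are illustrative assumptions.

        # Low-dimensional latent dynamics driving high-dimensional population activity.
        import numpy as np

        rng = np.random.default_rng(2)
        T, d_latent, n_neurons = 200, 2, 50

        # Rotational latent dynamics x_{t+1} = A x_t + noise.
        theta = 0.1
        A = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])
        x = np.zeros((T, d_latent))
        x[0] = rng.standard_normal(d_latent)
        for t in range(T - 1):
            x[t + 1] = A @ x[t] + 0.05 * rng.standard_normal(d_latent)

        # Observed population activity y_t = C x_t + noise.
        C = rng.standard_normal((n_neurons, d_latent))
        y = x @ C.T + 0.5 * rng.standard_normal((T, n_neurons))
        print(y.shape)                                       # (200, 50)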

    Temporal structure in neuronal activity during working memory in Macaque parietal cortex

    A number of cortical structures are reported to have elevated single-unit firing rates sustained throughout the memory period of a working memory task. How the nervous system forms and maintains these memories is unknown, but reverberating neuronal network activity is thought to be important. We studied the temporal structure of single-unit (SU) activity and simultaneously recorded local field potential (LFP) activity from area LIP in the inferior parietal lobe of two awake macaques during a memory-saccade task. Using multitaper techniques for spectral analysis, which play an important role in obtaining the present results, we find elevations in spectral power in a 50–90 Hz (gamma) frequency band during the memory period in both SU and LFP activity. The activity is tuned to the direction of the saccade, providing evidence for temporal structure that codes for movement plans during working memory. We also find that SU and LFP activity are coherent during the memory period in the 50–90 Hz gamma band, whereas no consistent relation is present during simple fixation. Finally, we find organized LFP activity in a 15–25 Hz frequency band that may be related to movement execution and preparatory aspects of the task. Neuronal activity could be used to control a neural prosthesis, but SU activity can be hard to isolate with cortical implants. As the LFP is easier to acquire than SU activity, our finding of rich temporal structure in LFP activity related to movement planning and execution may accelerate the development of this medical application.
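
    For readers unfamiliar with the spectral tools, the sketch below computes a basic multitaper power-spectrum estimate with DPSS (Slepian) tapers via SciPy, applied to a synthetic LFP-like trace containing a 70 Hz component. The sampling rate, time-bandwidth product, and signal are illustrative assumptions, not the recorded data or the exact analysis pipeline.

        # Multitaper power-spectrum estimate of a synthetic LFP-like trace.
        import numpy as np
        from scipy.signal.windows import dpss

        fs = 1000.0                                          # sampling rate, Hz
        t = np.arange(0, 1.0, 1 / fs)
        lfp = np.sin(2 * np.pi * 70 * t) + np.random.default_rng(3).standard_normal(t.size)

        NW, K = 4, 7                                         # time-bandwidth product, number of tapers
        tapers = dpss(len(lfp), NW, K)                       # (K, N) Slepian sequences

        # Average the tapered periodograms to obtain the multitaper estimate.
        spectra = np.abs(np.fft.rfft(tapers * lfp, axis=1)) ** 2
        mt_spectrum = spectra.mean(axis=0) / fs
        freqs = np.fft.rfftfreq(len(lfp), 1 / fs)
        print(freqs[np.argmax(mt_spectrum)])                 # peak should lie near 70 Hz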

    Scalable transformed additive signal decomposition by non-conjugate Gaussian process inference

    Many functions and signals of interest are formed by the addition of multiple underlying components, often nonlinearly transformed and modified by noise. Examples may be found in the literature on Generalized Additive Models [1], Underdetermined Source Separation [2], and other mode decomposition techniques. Recovery of the underlying component processes often depends on finding and exploiting statistical regularities within them. Gaussian Processes (GPs) [3] have become the dominant way to model statistical expectations over functions. Recent advances make inference of the GP posterior efficient for large-scale datasets and arbitrary likelihoods [4, 5]. Here we extend these methods to the additive GP case [6, 7], thus achieving scalable marginal posterior inference over each latent function in settings such as those above.
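
    A minimal sketch of the generative picture assumed above: two latent GP components with different kernels are summed, passed through a nonlinearity, and observed under a non-conjugate (here Poisson) likelihood. The kernels, link function, and likelihood are illustrative assumptions, and the scalable posterior inference scheme itself is not reproduced.

        # Transformed additive GP model: sample two components, transform, observe.
        import numpy as np

        rng = np.random.default_rng(4)
        x = np.linspace(0, 10, 300)[:, None]

        def rbf(x1, x2, lengthscale, variance):
            return variance * np.exp(-0.5 * (x1 - x2.T) ** 2 / lengthscale ** 2)

        K_slow = rbf(x, x, lengthscale=2.0, variance=1.0)    # smooth trend component
        K_fast = rbf(x, x, lengthscale=0.2, variance=0.3)    # rapid fluctuation component
        jitter = 1e-8 * np.eye(len(x))

        f_slow = rng.multivariate_normal(np.zeros(len(x)), K_slow + jitter)
        f_fast = rng.multivariate_normal(np.zeros(len(x)), K_fast + jitter)

        # Nonlinear transform of the sum gives a positive rate; Poisson noise makes
        # the likelihood non-conjugate to the GP prior.
        rate = np.exp(f_slow + f_fast)
        y = rng.poisson(rate)
        print(y[:10])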

    Score-matching estimators for continuous-time point-process regression models

    We introduce a new class of efficient estimators based on score matching for probabilistic point-process models. Unlike discretised likelihood-based estimators, score-matching estimators operate on continuous-time data, with computational demands that grow with the number of events rather than with total observation time. Furthermore, estimators for many common regression models can be obtained in closed form, rather than by iteration. This new approach to estimation may thus expand the range of tractable models available for event-based data.
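
    For orientation, the classical score-matching objective for a density model p_\theta(x), which the paper adapts to continuous-time point-process intensities, can be written (under mild regularity conditions, and up to a constant independent of \theta) as

        J(\theta) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\big\|\nabla_x \log p_\theta(x) - \nabla_x \log p_{\mathrm{data}}(x)\big\|^2\right]
                  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\sum_i \Big(\partial_i^2 \log p_\theta(x) + \tfrac{1}{2}\big(\partial_i \log p_\theta(x)\big)^2\Big)\right] + \mathrm{const},

    where the second form, obtained by integration by parts, no longer depends on the unknown data score and can be estimated directly from samples. Closed-form solutions arise, for example, when the model score is linear in the parameters, which makes the objective quadratic; the paper's continuous-time point-process version of this construction is not reproduced here.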

    Residual dynamics resolves recurrent contributions to neural computation

    Relating neural activity to behavior requires an understanding of how neural computations arise from the coordinated dynamics of distributed, recurrently connected neural populations. However, inferring the nature of recurrent dynamics from partial recordings of a neural circuit presents considerable challenges. Here we show that some of these challenges can be overcome by a fine-grained analysis of the dynamics of neural residuals, that is, trial-by-trial variability around the mean neural population trajectory for a given task condition. Residual dynamics in macaque prefrontal cortex (PFC) in a saccade-based perceptual decision-making task reveals recurrent dynamics that is time-dependent but consistently stable, and suggests that pronounced rotational structure in PFC trajectories during saccades is driven by inputs from upstream areas. The properties of residual dynamics restrict the possible contributions of PFC to decision-making and saccade generation and suggest a path toward fully characterizing distributed neural computations with large-scale neural recordings and targeted causal perturbations.
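
    The core construction can be sketched in a few lines: subtract the condition-averaged trajectory to obtain residuals, fit a time-dependent linear map carrying residuals at one time step to the next across trials, and inspect its eigenvalues for stability. The synthetic data and plain least-squares fit below are illustrative assumptions; the full pipeline in the paper (dimensionality reduction, regularisation, and statistical controls) is not reproduced.

        # Residual dynamics, stripped to its essentials, on stand-in data.
        import numpy as np

        rng = np.random.default_rng(5)
        n_trials, n_time, n_neurons = 200, 50, 20
        activity = rng.standard_normal((n_trials, n_time, n_neurons))    # stand-in recordings

        # Residuals: trial-by-trial deviations from this condition's mean trajectory.
        residuals = activity - activity.mean(axis=0, keepdims=True)

        # Fit one linear map per time step: residual[:, t + 1] ~= residual[:, t] @ M_t.
        eigvals = []
        for t in range(n_time - 1):
            R_t, R_next = residuals[:, t], residuals[:, t + 1]
            M_t, *_ = np.linalg.lstsq(R_t, R_next, rcond=None)
            A_t = M_t.T                                      # so that r_{t+1} ~= A_t @ r_t
            eigvals.append(np.linalg.eigvals(A_t))

        # |eigenvalue| < 1 at every time step indicates stable (decaying) residual dynamics.
        print(np.abs(eigvals).max())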

    The Impact of Anesthetic State on Spike-Sorting Success in the Cortex: A Comparison of Ketamine and Urethane Anesthesia

    Spike sorting is an essential first step in most analyses of extracellular in vivo electrophysiological recordings. Here we show that spike-sorting success depends critically on characteristics of coordinated population activity that can differ between anesthetic states. In tetrode recordings from mouse auditory cortex, spike sorting was significantly less successful under ketamine/medetomidine (ket/med) than urethane anesthesia. Surprisingly, this difficulty with sorting under ket/med anesthesia did not appear to result from either greater millisecond-scale burstiness of neural activity or increased coordination of activity among neighboring neurons. Rather, the key factor affecting sorting success appeared to be the amount of coordinated population activity at long time intervals and across large cortical distances. We propose that spike-sorting success is directly dependent on overall coordination of activity, and is most disrupted by large-scale fluctuations in cortical population activity. Reliability of single-unit recording may therefore differ not only between urethane-anesthetized and ket/med-anesthetized states, as demonstrated here, but also between synchronized and desynchronized states, asleep and awake states, or inattentive and attentive states in unanesthetized animals.